Flowsheet simulation and optimization supported by machine learning methods
Similar resources
Stochastic Methods For Optimization and Machine Learning
In this project a stochastic method for general purpose optimization and machine learning is described. The method is derived from basic information-theoretic principles and generalizes the popular Cross Entropy method. The effectiveness of the method as a tool for statistical modeling and Monte Carlo simulation is demonstrated with an application to the problems of density estimation and data ...
Optimization Methods for Large-Scale Machine Learning
This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machi...
Modern Probabilistic Machine Learning and Control Methods for Portfolio Optimization
Many recent theoretical developments in the field of machine learning and control have rapidly expanded its relevance to a wide variety of applications. In particular, a variety of portfolio optimization problems have recently been considered as a promising application domain for machine learning and control methods. In highly uncertain and stochastic environments, portfolio optimization can be...
Machine Learning and Optimization
This final project attempts to show the differences between machine learning and optimization. In particular, while optimization is concerned with exact solutions, machine learning is concerned with the generalization abilities of learners. We present examples in the areas of classification and regression where this difference is easy to observe, as well as theoretical reasons why these two areas are dif...
Accelerated Parallel Optimization Methods for Large Scale Machine Learning
The growing amount of high dimensional data in different machine learning applications requires more efficient and scalable optimization algorithms. In this work, we consider combining two techniques, parallelism and Nesterov’s acceleration, to design faster algorithms for L1-regularized loss. We first simplify BOOM [11], a variant of gradient descent, and study it in a unified framework, which...
Journal
Journal title: Chemie Ingenieur Technik
Year: 2018
ISSN: 0009-286X
DOI: 10.1002/cite.201855360